Conversation
Extract AGENT_MODEL constant in agent.py so tests use the same model as production.
@theomonnom @u9g @Topherhindman I'm not sure this is really a good idea. The chat model isn't really appropriate for judging, so it's not great to push people into thinking they have to use the same model. The real issue is that the agent in the tests was using a different LLM than in production, which should be fixed by putting the LLM property on the Agent itself. That would be better encapsulation and the right pattern to demonstrate. (I saw the Node one took the approach of a different judge, which is good, although I disagree with using a cheaper model for judging; it seems it would be most valuable to use a bigger model to test the fast chat model you use in-conversation. But it too would benefit from showing the more durable pattern of putting the agent's LLM onto the agent itself.)
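The pattern suggested above could look roughly like this: the agent owns its LLM, so tests exercise exactly the wiring production uses instead of configuring a model separately. This is a minimal sketch with hypothetical names (`LLM`, `Agent`, and the model string are illustrative, not the project's actual API):

```python
# Hypothetical sketch: the Agent owns its LLM, so tests and production
# share the same model configuration by construction.
from dataclasses import dataclass, field

AGENT_MODEL = "gpt-4o-mini"  # assumed model name; single source of truth


@dataclass
class LLM:
    model: str


@dataclass
class Agent:
    # The LLM is a property of the agent itself, rather than being
    # configured independently in tests vs. production.
    llm: LLM = field(default_factory=lambda: LLM(model=AGENT_MODEL))


agent = Agent()
# A test can read the model straight off the agent under test,
# so there is no second place for the two to drift apart.
assert agent.llm.model == AGENT_MODEL
```

A judge used in evals can then be a deliberately different (and, per the comment above, ideally stronger) model, without the agent's own model being duplicated anywhere.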
Summary
AGENT_MODEL constant in agent.py so tests use the same LLM model as production
Test plan
uv run pytest — 3/3 passed